CanvasGAN: A simple baseline for text to image generation by incrementally patching a canvas
We propose a new recurrent generative model for generating images from text
captions while attending to specific parts of those captions. Our model creates
images by incrementally adding patches to a "canvas" while attending to words
from the text caption at each timestep. Finally, the canvas is passed through an
upscaling network to generate images. We also introduce a new method for
generating visual-semantic sentence embeddings based on self-attention over
text. We compare our model's generated images with those generated by Reed et
al.'s model and show that our model is a stronger baseline for text-to-image
generation tasks. Comment: CVC 201
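The abstract's core loop — attend over caption words, produce a patch, add it to a canvas — can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: the sizes, the per-step query vectors (standing in for a recurrent state), and the linear patch projection are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper.
T, D, H, W = 4, 8, 16, 16           # timesteps, word dim, canvas height/width
words = rng.normal(size=(5, D))     # 5 word embeddings from a caption
canvas = np.zeros((H, W))

patch_proj = rng.normal(size=(D, H * W)) * 0.1  # attended word -> patch
queries = rng.normal(size=(T, D))   # stand-in for a recurrent query per step

for t in range(T):
    # Soft attention over caption words (softmax of dot-product scores).
    scores = words @ queries[t]
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ words                       # attended word vector
    canvas += (context @ patch_proj).reshape(H, W)  # patch the canvas

# A real model would now pass `canvas` through an upscaling network.
print(canvas.shape)
```

In the actual model the query would come from the recurrent state and the patch from a learned decoder; the sketch only shows the incremental attend-then-patch structure.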
Generative Adversarial Text to Image Synthesis
Automatic synthesis of realistic images from text would be interesting and
useful, but current AI systems are still far from this goal. However, in recent
years generic and powerful recurrent neural network architectures have been
developed to learn discriminative text feature representations. Meanwhile, deep
convolutional generative adversarial networks (GANs) have begun to generate
highly compelling images of specific categories, such as faces, album covers,
and room interiors. In this work, we develop a novel deep architecture and GAN
formulation to effectively bridge these advances in text and image modeling,
translating visual concepts from characters to pixels. We demonstrate the
capability of our model to generate plausible images of birds and flowers from
detailed text descriptions. Comment: ICML 201
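The text-conditioning idea described here — a generator that maps noise plus a text embedding to pixels, and a discriminator that also sees the text — can be illustrated with a toy sketch. All shapes and the single linear layers are hypothetical stand-ins for the deep networks in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: noise dim, text-embedding dim, output pixels.
Z, E, P = 16, 8, 64
W_g = rng.normal(size=(Z + E, P)) * 0.1  # one-layer "generator" stand-in
W_d = rng.normal(size=(P + E,)) * 0.1    # "discriminator" also sees the text

text_emb = rng.normal(size=(E,))         # encoded caption
z = rng.normal(size=(Z,))                # noise sample

# Generator: condition on text by concatenating it with the noise.
fake_image = np.tanh(np.concatenate([z, text_emb]) @ W_g)

# Discriminator: score the (image, text) pair with a sigmoid.
logit = np.concatenate([fake_image, text_emb]) @ W_d
score = 1.0 / (1.0 + np.exp(-logit))
print(fake_image.shape, score)
```

The point of the sketch is only the conditioning pattern: both networks take the text embedding as an extra input, which is what lets the GAN translate captions into matching images.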
Parsing is All You Need for Accurate Gait Recognition in the Wild
Binary silhouettes and keypoint-based skeletons have dominated human gait
recognition studies for decades since they are easy to extract from video
frames. Despite their success in gait recognition for in-the-lab environments,
they usually fail in real-world scenarios due to their low information entropy
for gait representations. To achieve accurate gait recognition in the wild,
this paper presents a novel gait representation, named Gait Parsing Sequence
(GPS). GPSs are sequences of fine-grained human segmentation, i.e., human
parsing, extracted from video frames, so they have much higher information
entropy to encode the shapes and dynamics of fine-grained human parts during
walking. Moreover, to effectively explore the capability of the GPS
representation, we propose a novel human parsing-based gait recognition
framework, named ParsingGait. ParsingGait contains a Convolutional Neural
Network (CNN)-based backbone and two light-weighted heads. The first head
extracts global semantic features from GPSs, while the other one learns mutual
information of part-level features through Graph Convolutional Networks to
model the detailed dynamics of human walking. Furthermore, due to the lack of
suitable datasets, we build the first parsing-based dataset for gait
recognition in the wild, named Gait3D-Parsing, by extending the large-scale and
challenging Gait3D dataset. Based on Gait3D-Parsing, we comprehensively
evaluate our method and existing gait recognition methods. The experimental
results show a significant improvement in accuracy brought by the GPS
representation and the superiority of ParsingGait. The code and dataset are
available at https://gait3d.github.io/gait3d-parsing-hp. Comment: 16 pages, 14 figures, ACM MM 2023 accepted, project page:
https://gait3d.github.io/gait3d-parsing-h
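The two-head design described in the abstract — a global semantic head and a part-level head using graph convolutions — can be sketched as below. This is an illustrative sketch under assumed shapes: the number of parts, feature dimension, chain-shaped part adjacency, and single graph-convolution step are all hypothetical, not the ParsingGait specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 7 human parts, 16-dim backbone feature per part.
parts, d = 7, 16
feat = rng.normal(size=(parts, d))   # per-part features for one frame

# Head 1: global semantic feature (mean pool over parts).
global_feat = feat.mean(axis=0)

# Head 2: one graph-convolution step over a fixed part-adjacency graph,
# modeling mutual information between neighboring body parts.
A = np.eye(parts) + np.eye(parts, k=1) + np.eye(parts, k=-1)  # chain graph
D_inv = np.diag(1.0 / A.sum(axis=1))
W = rng.normal(size=(d, d)) * 0.1
part_feat = np.maximum(D_inv @ A @ feat @ W, 0)  # ReLU(D^-1 A X W)

print(global_feat.shape, part_feat.shape)
```

A full pipeline would run both heads over the whole Gait Parsing Sequence and fuse their outputs into a gait embedding; the sketch only shows how the two heads consume the same parsing-derived features.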